Results 1 - 20 of 27
1.
Journal of Agriculture Food Systems and Community Development ; 12(2):185-200, 2022.
Article in English | Web of Science | ID: covidwho-20231374

ABSTRACT

Promoting local food systems is crucial to providing a more viable economy, eco-friendly production, and equal opportunities for producers, consumers, and communities. Meat processors are critical to local meat producers and the meat supply chain. However, various barriers have restricted small-scale meat processors and challenged the local meat supply chain. Although local food systems have gained enormous scholarly attention, little attention has been devoted to specifically exploring the meat processing sector. This study investigated the characteristics and challenges of small-scale (<750 employees) and very-small-scale (<200 employees) meat processors in Missouri. Twenty-six meat processors participated in an online survey through Qualtrics, a mail survey, or a structured phone interview between May 2021 and March 2022. We identified the characteristics and constraints related to their businesses. The analysis revealed that 76% of meat processors perceived that their business was in better or much better condition than before the COVID-19 pandemic, reflecting their adaptability to the disrupted meat supply chain. However, small-scale meat processing facilities were limited by the labor shortage, complicated regulations and high regulatory compliance costs, a lack of consistent supply, and limited access to tools and equipment. More integrated work is needed to aid smaller processors in positively impacting the local community and environment through locally sourced meat production. This study contains helpful implications for state-level policymaking, extension programs, and future research directions.

2.
20th IEEE International Conference on Embedded and Ubiquitous Computing, EUC 2022 ; : 17-22, 2022.
Article in English | Scopus | ID: covidwho-2319669

ABSTRACT

After the COVID-induced lockdowns, augmented/virtual reality turned from a leisure technology into a desired reality. Real-time 3D audio is a crucial enabler for these technologies. Nevertheless, systems offering object spatialization in 3D audio fall into two limited categories: they either require long-running pre-renders or involve powerful computing platforms. Furthermore, they mainly focus on active audio sources, while humans rely on the sound's interactions with passive obstructions to sense their environment. We propose a hardware co-processor for real-time 3D audio spatialization supporting passive obstructions. Our solution attains latency similar to that of workstations while draining a tenth of the power, making it suitable for embedded applications. © 2022 IEEE.
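As a toy illustration of the spatialization problem this entry addresses (not the co-processor's algorithm, which also models reflections off passive obstructions), the core per-source computation can be sketched as a free-field delay and inverse-distance attenuation; the constant and coordinates below are illustrative:

```python
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 °C

def spatialize(source, ear, gain=1.0):
    """Return (delay_seconds, attenuation) for a point source heard at one ear.

    Free-field model only: delay is distance over the speed of sound and
    attenuation follows the inverse-distance law. A real spatializer adds
    head-related filtering and, as in this paper, interactions with
    passive obstructions.
    """
    d = math.dist(source, ear)
    delay = d / SPEED_OF_SOUND
    attenuation = gain / max(d, 1e-6)  # clamp to avoid dividing by zero
    return delay, attenuation

# A source 3.43 m in front of the listener arrives after about 10 ms.
delay, att = spatialize((0.0, 3.43, 0.0), (0.0, 0.0, 0.0))
```

A real-time engine runs this (plus filtering) per source, per audio block, which is why latency and power budgets dominate the design.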

3.
13th International Conference on Cloud Computing, Data Science and Engineering, Confluence 2023 ; : 250-255, 2023.
Article in English | Scopus | ID: covidwho-2277115

ABSTRACT

Pneumonia has been a concerning issue worldwide. This infectious disease has a higher mortality rate than COVID-19. More than two million individuals lost their lives to it in 2019, of which almost 600,000 were infants less than 5 years of age. Globally, identification of the disease is done manually by radiologists, but this method is highly unreliable as its accuracy is not sufficiently good. With the evolution of computational resources, especially the computing power of GPUs, it has become possible to train very deep CNNs. This study involves a comparative analysis of neural networks for pneumonia recognition. The goal is to perform binary image classification for pneumonia recognition with each of three models, namely a Sequential model built from scratch in TensorFlow, ResNet50, and InceptionV3, and to compare their efficiency to discover which model suits smaller datasets best and which suits larger datasets best. The dataset consists of 5,856 anterior and posterior chest X-ray images labeled as either Normal or Pneumonic. © 2023 IEEE.
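The comparison protocol behind such a study is simple: every candidate model is scored on the same labelled test split so accuracies are directly comparable. A minimal sketch of that harness, with stand-in threshold classifiers instead of the paper's CNNs (all names and data below are invented):

```python
# Toy harness mirroring the comparison protocol: every candidate model is
# scored on the same labelled test set, so accuracies are directly comparable.
# The "models" here are stand-in callables, not the CNNs from the paper.

def accuracy(model, images, labels):
    correct = sum(1 for x, y in zip(images, labels) if model(x) == y)
    return correct / len(labels)

def compare(models, images, labels):
    """Return {model_name: accuracy} over a shared test split."""
    return {name: accuracy(fn, images, labels) for name, fn in models.items()}

# Stub binary classifiers standing in for Sequential / ResNet50 / InceptionV3.
test_images = [0.2, 0.7, 0.9, 0.1]   # e.g. mean pixel intensity per image
test_labels = [0, 1, 1, 0]           # 0 = Normal, 1 = Pneumonic
candidates = {
    "threshold@0.5": lambda x: int(x > 0.5),
    "threshold@0.8": lambda x: int(x > 0.8),
}
scores = compare(candidates, test_images, test_labels)
```

In the actual study the callables would be trained Keras models and the split would hold out part of the 5,856 X-ray images.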

4.
3rd International Conference on Data Science and Applications, ICDSA 2022 ; 552:301-312, 2023.
Article in English | Scopus | ID: covidwho-2268370

ABSTRACT

With the COVID-19 pandemic worldwide, several detection and diagnostic methods have been put in place. One of the standard modes of detection is computed tomography imaging. With the availability of computing resources and powerful GPUs, the analysis of extensive image data has become possible. Our proposed work initially deals with the classification of CT images as normal or infected, and later, the infected images are classified based on their severity. The proposed work uses a 3D convolutional neural network model to extract all the relevant features from the CT scan images. The results are also compared with existing state-of-the-art algorithms. The proposed work is evaluated in terms of accuracy, precision, recall, kappa value, and Intersection over Union. The model achieved an overall accuracy of 94.234% and a kappa value of 0.894. © 2023, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.

5.
1st Combined International Workshop on Interactive Urgent Supercomputing, CIW-IUS 2022 ; : 1-9, 2022.
Article in English | Scopus | ID: covidwho-2265990

ABSTRACT

The COVID-19 pandemic has presented a clear and present need for urgent decision making. Set in an environment of uncertain and unreliable data and a diverse range of possible interventions, there is an obvious need for integrating HPC into workflows that include model calibration and the exploration of the decision space. In this paper, we present the design of PanSim, a portable, performant, and productive agent-based simulator, which has been extensively used to model and forecast the pandemic in Hungary. We show its performance and scalability on CPUs and GPUs, then discuss the workflows PanSim integrates into. We describe the heterogeneous, resource-constrained HPC environment available to us and formulate a scheduling optimisation problem, as well as heuristics to solve it, to either minimise the execution time of a given number of simulations or maximise the number of simulations executed in a given time frame. © 2022 IEEE.
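One plausible heuristic for the makespan objective described here (minimise the execution time of a fixed batch of simulations on heterogeneous nodes) is longest-processing-time-first with speed-scaled costs. This is a generic sketch, not PanSim's own scheduler; the job times and node speeds are invented:

```python
import heapq

def schedule(sim_times, node_speeds):
    """Greedy longest-processing-time heuristic for the makespan objective.

    Longest simulations are placed first, each on the node that frees up
    earliest; a node's speed scales the cost of the jobs it runs.
    Returns (assignment, makespan) where assignment[j] is the node of job j.
    """
    # Min-heap of (current finish time, node index).
    heap = [(0.0, i) for i in range(len(node_speeds))]
    heapq.heapify(heap)
    assignment = [None] * len(sim_times)
    for job in sorted(range(len(sim_times)), key=lambda j: -sim_times[j]):
        finish, node = heapq.heappop(heap)
        finish += sim_times[job] / node_speeds[node]
        assignment[job] = node
        heapq.heappush(heap, (finish, node))
    makespan = max(t for t, _ in heap)
    return assignment, makespan

# Four simulations on two equal-speed nodes.
assignment, makespan = schedule([4.0, 3.0, 2.0, 1.0], [1.0, 1.0])
```

The dual objective (maximise simulations within a deadline) can reuse the same heap, stopping when the earliest-free node would exceed the time frame.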

6.
Nuclear Instruments and Methods in Physics Research, Section A: Accelerators, Spectrometers, Detectors and Associated Equipment ; 1046, 2023.
Article in English | Scopus | ID: covidwho-2241361

ABSTRACT

The Alpha Magnetic Spectrometer (AMS) is constantly exposed to harsh conditions on the ISS. As such, there is a need to constantly monitor and perform adjustments to ensure the AMS operates safely and efficiently. With the addition of the Upgraded Tracker Thermal Pump System, the legacy monitoring interface was no longer suitable for use. This paper describes the new AMS Monitoring Interface (AMI). The AMI is built with state-of-the-art time-series database and analytics software. It uses a custom feeder program to process AMS raw data into time-series data points, feeds them into InfluxDB databases, and uses Grafana as a visualization tool. It follows modern design principles, allowing client CPUs to handle the processing work, distributed creation of AMI dashboards, and up-to-date security protocols. In addition, it offers a simpler way of modifying the AMI and allows the use of APIs to automate backup and synchronization. The new AMI has been in use since January 2020 and was a crucial component in remote shift taking during the COVID-19 pandemic. © 2022 Elsevier B.V.
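The feeder's job (raw frames in, time-series points out) ultimately comes down to emitting records in InfluxDB's line protocol. A minimal sketch of that formatting step; the measurement, tag, and field names are invented, and escaping of spaces/commas in values is omitted for brevity:

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Render one telemetry sample as an InfluxDB line-protocol record.

    Line protocol is 'measurement,tag=val field=val timestamp' with a
    nanosecond timestamp. Tags and fields are sorted for a stable output;
    string field values are quoted, numeric ones are not.
    """
    tag_part = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    field_part = ",".join(
        f'{k}="{v}"' if isinstance(v, str) else f"{k}={v}"
        for k, v in sorted(fields.items())
    )
    return f"{measurement},{tag_part} {field_part} {ts_ns}"

# Hypothetical sample from the thermal pump system.
line = to_line_protocol(
    "tracker_thermal",
    {"subsystem": "UTTPS", "sensor": "pump1"},
    {"temp_c": 21.5},
    1_579_000_000_000_000_000,
)
```

A real feeder would batch such lines and POST them to InfluxDB's write endpoint rather than build strings one at a time.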

7.
IEEE Transactions on Parallel and Distributed Systems ; 2023.
Article in English | Scopus | ID: covidwho-2232135

ABSTRACT

Simulation-based Inference (SBI) is a widely used set of algorithms to learn the parameters of complex scientific simulation models. While primarily run on CPUs in High-Performance Compute clusters, these algorithms have been shown to scale in performance when developed to be run on massively parallel architectures such as GPUs. While parallelizing existing SBI algorithms provides us with performance gains, this might not be the most efficient way to utilize the achieved parallelism. This work proposes a new parallelism-aware adaptation of an existing SBI method, namely approximate Bayesian computation with Sequential Monte Carlo (ABC-SMC). This new adaptation is designed to utilize the parallelism not only for performance gain, but also toward qualitative benefits in the learnt parameters. The key idea is to replace the notion of a single 'step-size' hyperparameter, which governs how the state space of parameters is explored during learning, with step-sizes sampled from a tuned Beta distribution. This allows this new ABC-SMC algorithm to more efficiently explore the state-space of the parameters being learned. We test the effectiveness of the proposed algorithm to learn parameters for an epidemiology model running on a Tesla T4 GPU. Compared to the parallelized state-of-the-art SBI algorithm, we get similar quality results in ~100× fewer simulations and observe ~80× lower run-to-run variance across 10 independent trials. IEEE
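The key idea, per-particle step sizes drawn from a Beta distribution instead of one fixed step-size hyperparameter, can be sketched in a perturbation kernel like the one below. The alpha/beta/scale values are illustrative, not the tuned ones from the paper:

```python
import random

def perturb(particles, alpha=2.0, beta=5.0, scale=1.0):
    """Perturb ABC-SMC particles with per-particle step sizes.

    Instead of a single fixed step-size, each particle draws its own step
    size from a Beta(alpha, beta) distribution and is then jittered by a
    Gaussian of that width. On a GPU every particle's draw and move would
    run in parallel; here a plain loop shows the structure.
    """
    new = []
    for theta in particles:
        step = random.betavariate(alpha, beta) * scale
        new.append(theta + random.gauss(0.0, step))
    return new

random.seed(0)  # reproducible toy run
moved = perturb([0.5, 1.0, 1.5])
```

Heterogeneous step sizes let some particles explore widely while others refine locally, which is the claimed source of the qualitative benefit.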

8.
2022 IEEE Frontiers in Education Conference, FIE 2022 ; 2022-October, 2022.
Article in English | Scopus | ID: covidwho-2191741

ABSTRACT

This Research to Practice Work-In-Progress paper presents a virtualized breadboard solution for FPGAs and ARM microcontrollers in remote laboratories. The circumstances that arose amid the COVID-19 pandemic demonstrated the vulnerability of current engineering education practices, particularly in dealing with hardware resources. Pivoting to emergency online instruction challenged the traditional practices for delivering hands-on engineering labs, necessitating a solution that handles hardware prototyping without compromising creativity and instruction. One vital aspect of the embedded systems learning experience is ensuring students and faculty members alike have opportunities to learn and build custom prototyping circuits that interact with microprocessors on breadboards. In this paper, we build on the prior work our group implemented on using virtualization to interface a virtual breadboard with physical hardware through web applications. Our previous work was limited to interfacing with one particular kind of hardware, designed to explore the capabilities of fundamental transducers and actuators that interface with hardware I/O pins. In hardware engineering practice, however, designers are not constrained to a single microprocessor selection to control their systems and designs, nor are they limited in the type of transducers and actuators that provide the external circuit functionality. This paper presents a solution by scaling the existing virtual breadboard research to support FPGAs, ARM microcontrollers, and intermediate logic-gate integrated circuits for practical use in engineering curricula. Providing this increased selection of supported hardware helps facilitate student learning and simulates hardware development in an industrial setting. Due to the rising popularity of FPGAs and ARM microcontrollers in industry and in education, we expect that our solution will serve a larger audience through this broader selection of supported hardware. Our solution virtualizes the breadboard prototyping experience without sacrificing the nature of real-time embedded systems by taking the user-prototyped inputs and outputs and directly programming the functionality of the surrounding system onto physical hardware. This balance between a virtualized interface and a physical hardware implementation preserves a hardware-based curriculum for embedded systems engineering education and brings a promising solution to expand the scalability and accessibility of engineering labs. © 2022 IEEE.

9.
36th IEEE International Parallel and Distributed Processing Symposium Workshops, IPDPSW 2022 ; : 196-205, 2022.
Article in English | Scopus | ID: covidwho-2018897

ABSTRACT

Selective sweep detection carries theoretical significance and has several practical implications, from explaining the adaptive evolution of a species in an environment to understanding the emergence of viruses from animals, such as SARS-CoV-2, and their transmission from human to human. The plethora of available genomic data for population genetic analyses, however, poses various computational challenges to existing methods and tools, leading to prohibitively long analysis times. In this work, we accelerate LD (linkage disequilibrium)-based selective sweep detection using GPUs and FPGAs on personal computers and datacenter infrastructures. LD has previously been efficiently accelerated with both GPUs and FPGAs. However, LD alone cannot serve as an indicator of selective sweeps. Here, we complement previous research with dedicated accelerators for the ω statistic, which is a direct indicator of a selective sweep. We evaluate the performance of our accelerator solutions for computing the ω statistic and for a complete sweep detection method, as implemented by the open-source software OmegaPlus. In comparison with a single CPU core, the FPGA accelerator delivers up to 57.1× and 61.7× faster computation of the ω statistic and the complete sweep detection analysis, respectively. The respective speedups attained by the GPU-accelerated version of OmegaPlus are 2.9× and 12.9×. The GPU-accelerated implementation is available for download here: https://github.com/MrKzn/omegaplus.git. © 2022 IEEE.
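The per-pair building block that sweep statistics such as ω aggregate over genomic regions is the r² measure of linkage disequilibrium. A minimal sketch of that computation (the ω statistic itself, a ratio of within-region to between-region LD, is omitted; the haplotype data is a toy example):

```python
def ld_r_squared(haplotypes, i, j):
    """r^2 linkage-disequilibrium statistic between SNP columns i and j.

    haplotypes: sequences of 0/1 alleles, one per sampled haplotype.
    D = p_ij - p_i * p_j, and r^2 = D^2 / (p_i (1-p_i) p_j (1-p_j)).
    """
    n = len(haplotypes)
    p_i = sum(h[i] for h in haplotypes) / n
    p_j = sum(h[j] for h in haplotypes) / n
    p_ij = sum(1 for h in haplotypes if h[i] and h[j]) / n
    d = p_ij - p_i * p_j
    denom = p_i * (1 - p_i) * p_j * (1 - p_j)
    return d * d / denom if denom else 0.0

# Sites 0 and 1 are perfectly linked; site 2 is independent of site 0.
haps = [(1, 1, 0), (1, 1, 1), (0, 0, 0), (0, 0, 1)]
r2_linked = ld_r_squared(haps, 0, 1)
```

The accelerators in this entry gain their speedups by evaluating enormous numbers of such pairwise terms in parallel.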

10.
6th International Conference on Robotics and Automation Sciences, ICRAS 2022 ; : 47-51, 2022.
Article in English | Scopus | ID: covidwho-2018869

ABSTRACT

In the context of the new coronavirus epidemic, medical systems throughout the world have suffered tremendous pressure, the most immediate problem being a shortage of human resources. In this regard, the 'intelligent drug delivery vehicle' puts forward a feasible scheme that can replace manual work in a specific hospital area to complete the delivery of drugs. The system is based on an STM32F103ZET6 core processor, which controls an OpenMV vision module to identify hospital corridor information; using information from the pressure detection module, gray-scale detection tracking module, and angle sensing module, the core processor then controls the motor drive module to move the vehicle. The system modifies the traditional NCC template matching algorithm, using a downscaled image to reduce the pixel count, which improves the camera frame rate and recognition accuracy. At the same time, a Bluetooth communication module is installed to enable different vehicles to execute drug delivery operations at the same time, further reducing manual work. © 2022 IEEE.
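The matching criterion behind NCC template matching is the normalized cross-correlation score; the paper's optimization (matching on a downscaled image to raise the frame rate) would simply feed smaller arrays to the same score. A sketch on flattened 1D patches, with illustrative values:

```python
import math

def ncc(patch, template):
    """Normalized cross-correlation score in [-1, 1].

    Both inputs are flattened pixel sequences of equal length. A score of
    1.0 means the patch matches the template exactly up to brightness
    offset and contrast scale, which is why NCC tolerates lighting changes.
    """
    ma = sum(patch) / len(patch)
    mb = sum(template) / len(template)
    num = sum((a - ma) * (b - mb) for a, b in zip(patch, template))
    den = math.sqrt(sum((a - ma) ** 2 for a in patch)
                    * sum((b - mb) ** 2 for b in template))
    return num / den if den else 0.0

# A patch that is the template brightened and scaled still scores 1.0.
score = ncc([10, 20, 30, 40], [1, 2, 3, 4])
```

Template matching slides this score over every candidate window, so halving the image resolution cuts the work roughly fourfold, the source of the frame-rate gain the entry describes.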

11.
22nd International Conference on Computational Science and Its Applications, ICCSA 2022 ; 13375 LNCS:412-427, 2022.
Article in English | Scopus | ID: covidwho-1971559

ABSTRACT

The coronavirus outbreak became a major concern for society worldwide. Technological innovation and ingenuity are essential to fight the COVID-19 pandemic and bring us one step closer to overcoming it. Researchers all over the world are working actively to find viable alternatives in different fields, such as the healthcare system, pharmaceutics, and health prevention, among others. With the rise of artificial intelligence (AI) in the last 10 years, AI-based applications have become the prevalent solution in different areas because of their higher capability, and are now adopted to help combat COVID-19. This work provides a fast detection system for COVID-19 characteristics in X-ray images based on deep learning (DL) techniques. The system is available as a free web-deployed service for fast patient classification, alleviating the high demand for standard methods of COVID-19 diagnosis. It consists of two deep learning models: one to differentiate between X-ray and non-X-ray images, based on the MobileNet architecture, and another to identify chest X-ray images with characteristics of COVID-19, based on the DenseNet architecture. For real-time inference, a pair of dedicated GPUs is provided, which reduces the computational time. The whole system can filter out non-chest X-ray images and detect whether an X-ray presents characteristics of COVID-19, highlighting the most sensitive regions. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.

12.
8th International Conference on Artificial Intelligence and Security , ICAIS 2022 ; 1586 CCIS:306-316, 2022.
Article in English | Scopus | ID: covidwho-1971397

ABSTRACT

With the development of deep learning, image recognition technology has been applied in many areas, and convolutional neural networks have played a key role in realizing image recognition given increasing computing power and massive data. However, for developers who want to train convolutional neural networks and deploy the resulting applications on personal computers, IoT devices, and embedded platforms with low Graphics Processing Unit (GPU) memory, the large number of parameters involved in training convolutional neural networks is a great challenge. Therefore, this paper uses depthwise separable convolution to optimize the classic convolutional neural network model VGG-16 to solve this problem. The VGG-16-JS model is proposed, applying Inception-style dimensionality reduction and depthwise separable convolution to the VGG-16 convolutional neural network model. Finally, this paper compares the classification success rates of VGG-16 and VGG-16-JS in a COVID-19 mask-wearing application scenario. A series of reliable experimental data shows that the improved VGG-16-JS model significantly reduces the number of parameters required for model training without a significant drop in the success rate, easing the GPU memory requirements for training neural networks to a certain extent. © 2022, The Author(s), under exclusive license to Springer Nature Switzerland AG.
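The parameter saving from depthwise separable convolution is simple arithmetic: a standard k×k convolution needs k·k·C_in·C_out weights, while the depthwise-plus-pointwise pair needs k·k·C_in + C_in·C_out. A sketch with an illustrative VGG-16-style layer shape (the exact layers VGG-16-JS modifies are not specified in the abstract):

```python
def conv_params(k, c_in, c_out, bias=False):
    """Weights in one standard k x k convolution layer."""
    return k * k * c_in * c_out + (c_out if bias else 0)

def depthwise_separable_params(k, c_in, c_out, bias=False):
    """Weights in the depthwise (k x k per channel) + pointwise (1 x 1) pair."""
    depthwise = k * k * c_in
    pointwise = c_in * c_out
    return depthwise + pointwise + (c_out if bias else 0)

# One mid-network VGG-16-style layer: 3x3 kernel, 256 -> 256 channels.
std = conv_params(3, 256, 256)
sep = depthwise_separable_params(3, 256, 256)
ratio = std / sep  # roughly 8.7x fewer weights for this layer shape
```

Applied across many layers, this ratio is what lets a VGG-16 variant fit into low-memory GPUs without retraining from a different architecture.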

13.
IEEE Transactions on Emerging Topics in Computing ; : 1-12, 2022.
Article in English | Scopus | ID: covidwho-1961439

ABSTRACT

The social and economic impact of the COVID-19 pandemic demands a reduction of the time required to find a therapeutic cure. In this paper, we describe the EXSCALATE molecular docking platform, capable of scaling to an entire modern supercomputer to support extreme-scale virtual screening campaigns. Such virtual experiments can quickly provide information on which molecules to consider in the next stages of the drug discovery pipeline, a key asset in case of a pandemic. The EXSCALATE platform has been designed to benefit from heterogeneous computation nodes and to reduce scaling issues. In particular, we maximized the accelerators' usage, minimized the communication between nodes, and aggregated the I/O requests to serve them more efficiently. Moreover, we balanced the computation across the nodes by designing an ad-hoc workflow based on the predicted execution time of each molecule. We deployed the platform on two HPC supercomputers, with a combined computational power of 81 PFLOPS, to evaluate the interaction between 70 billion small molecules and 15 binding sites of 12 viral proteins of SARS-CoV-2. The experiment lasted 60 hours and performed more than one trillion ligand-pocket evaluations, setting a new record for virtual screening scale. IEEE

14.
Mobile Information Systems ; 2022, 2022.
Article in English | Scopus | ID: covidwho-1950372

ABSTRACT

Coronaviruses are a large family of viruses that affect humans and damage respiratory function, causing illnesses ranging from the common cold to more serious diseases such as ARDS and SARS; the most recently discovered coronavirus causes COVID-19. Isolation at home or in hospital depends on one's health history and condition, and disease instigated by the virus can lead to a deterioration in health, so early detection of the virus is needed. Recently, many works have deployed detection techniques based on chest X-rays. In this work, a solution is proposed consisting of a prototype AI-based, Flask-driven web application framework that predicts six different classes: ARDS, bacteria, COVID-19, SARS, Streptococcus, and virus. Each category of X-ray images was placed under scrutiny, and training and testing were conducted using deep learning models such as CNN, ResNet (with and without dropout), VGG16, and AlexNet to detect the status of the X-rays. Recent FPGA design tools are compatible with software models used in deep learning methods. FPGAs suit deep learning algorithms, making designs flexible and innovative from a hardware acceleration perspective, and high-performance FPGA hardware can be advantageous over GPUs. Looking forward, the device can efficiently integrate with deep learning modules. FPGAs act as a competitive alternative platform, bridging the gap between architectures and power-aware designs, and are a strong option for implementing such algorithms. The design, implemented in an FPGA environment, attains 121 μW power and 89 ms delay with a reduced gate count. © 2022 Anupama Namburu et al.

15.
8th EAI International Conference on Industrial Networks and Intelligent Systems, INISCOM 2022 ; 444 LNICST:107-124, 2022.
Article in English | Scopus | ID: covidwho-1919727

ABSTRACT

In the modern age, the growth of embedded devices, IoT (Internet of Things), 5G (fifth generation) networks, and AI (artificial intelligence) has driven edge AI applications. Adopting edge computing for AI applications is intended to deal with power consumption, network capacity, and response latency issues. In this paper, we introduce an intelligent edge system. It aims to assist with managing and developing microservices-based AI applications on embedded computers with limited hardware resources. The proposed system uses Docker/containerd and a lightweight Kubernetes cluster (K3s) for high availability, self-healing, load balancing, scaling, and automated deployment. It also uses the GPU (Graphics Processing Unit) to speed up AI applications. The centralized cluster management and monitoring features simplify cluster and service administration, especially at large scale. Meanwhile, a container registry and a DevOps platform with a built-in code repository and CI/CD (Continuous Integration/Continuous Delivery) offer continuous integration and delivery for AI applications running on the cluster. This improves the process of developing and managing AI applications at the edge. As a demonstration, we implement a face mask recognition application with the proposed system. This application engages state-of-the-art lightweight deep learning object detection models, observing mask violations to help reduce the spread of COVID-19. © 2022, ICST Institute for Computer Sciences, Social Informatics and Telecommunications Engineering.

16.
International Journal of Parallel, Emergent and Distributed Systems ; 2022.
Article in English | Scopus | ID: covidwho-1900955

ABSTRACT

Field programmable gate arrays (FPGAs) have become widely prevalent in recent years as a great alternative to application-specific integrated circuits (ASICs) and as a potentially cheap alternative to expensive graphics processing units (GPUs). Introduced as a prototyping solution for ASICs, FPGAs are now widely popular in applications such as artificial intelligence (AI) and machine learning (ML) models that require processing data rapidly. As a relatively low-cost alternative to GPUs, FPGAs have the advantage of being reprogrammable for use in almost any data-driven application. In this work, we propose an easily scalable and cost-effective cluster-based co-processing system using FPGAs for ML and AI applications that is easily reconfigured to the requirements of each user application. The aim is to introduce a clustering system of FPGA boards to improve the efficiency of the training component of machine learning algorithms. Our proposed configuration provides an opportunity to utilise relatively inexpensive FPGA development boards to produce a cluster without expert knowledge of VHDL, Verilog, or the system designs related to FPGA development. Consisting of two parts, a computer-based host application to control the cluster and an FPGA cluster connected through a high-speed Ethernet switch, the system allows users to customise and adapt it without much effort. The methods proposed in this paper allow any FPGA board with an Ethernet port to be used as part of the cluster and scaled without bound. To demonstrate the flexibility and portability of the proposed work, a two-part experiment with a homogeneous and a heterogeneous cluster was conducted, with results compared against a desktop computer and combinations of FPGAs in the two clusters. Data sets ranging from 60,000 to 14 million records, including stroke prediction and COVID-19 data, were used in conducting the experiments. Results suggest that the proposed system performs close to 70% faster than a traditional computer with similar accuracy rates. © 2022 Informa UK Limited, trading as Taylor & Francis Group.

17.
ACM Journal on Emerging Technologies in Computing Systems ; 18(2), 2022.
Article in English | Scopus | ID: covidwho-1846548

ABSTRACT

Epidemiology models are central to understanding and controlling large-scale pandemics. Several epidemiology models require simulation-based inference such as Approximate Bayesian Computation (ABC) to fit their parameters to observations. ABC inference is highly amenable to efficient hardware acceleration. In this work, we develop parallel ABC inference for a stochastic epidemiology model of COVID-19. The statistical inference framework is implemented and compared on Intel's Xeon CPU, NVIDIA's Tesla V100 GPU, Google's V2 Tensor Processing Unit (TPU), and Graphcore's Mk1 Intelligence Processing Unit (IPU), and the results are discussed in the context of their computational architectures. Results show that TPUs are 3×, GPUs are 4×, and IPUs are 30× faster than Xeon CPUs. Extensive performance analysis indicates that the difference between IPU and GPU can be attributed to higher communication bandwidth, closeness of memory to compute, and higher compute power in the IPU. The proposed framework scales across 16 IPUs, with scaling overhead not exceeding 8% for the experiments performed. We present an example of our framework in practice, performing inference on the epidemiology model across three countries and giving a brief overview of the results. © 2022 Association for Computing Machinery.
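What makes ABC "highly amenable to hardware acceleration" is that each candidate parameter is simulated independently. The structure can be sketched as an embarrassingly parallel rejection step on a toy stochastic growth model; the model, priors, and thread-based parallelism below are illustrative stand-ins for the paper's epidemiology model and accelerator backends:

```python
import random
from concurrent.futures import ThreadPoolExecutor

def simulate(beta, days=30, i0=1, seed=None):
    """Toy stochastic growth model standing in for the epidemiology model.

    Each day, every infected individual infects one more with probability
    beta. Returns the final infected count.
    """
    rng = random.Random(seed)
    infected = i0
    for _ in range(days):
        infected += sum(1 for _ in range(infected) if rng.random() < beta)
    return infected

def abc_accept(observed, prior_draws, tolerance, workers=4):
    """ABC rejection step: simulate every prior draw, keep the close ones.

    The simulations share nothing, so they map trivially onto threads here
    and onto GPU/TPU/IPU lanes in the paper's framework.
    """
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(lambda b: (b, simulate(b, seed=0)),
                                prior_draws))
    return [b for b, x in results if abs(x - observed) <= tolerance]

observed = simulate(0.1, seed=0)  # pretend this is the real observation
accepted = abc_accept(observed, [0.05, 0.1, 0.2], tolerance=0)
```

The architectural comparison in the entry then comes down to how cheaply each device runs millions of such independent simulations and gathers the accept/reject results.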

18.
Journal of Animal Science ; 99(Supplement_3):40-41, 2021.
Article in English | ProQuest Central | ID: covidwho-1831218

ABSTRACT

Meat shortages in many of the largest retail chains during the early months of the COVID-19 pandemic affected millions of U.S. consumers. In addition, wait times for custom slaughter of meat animals increased from days to weeks to upwards of 14 months. Interruptions in livestock slaughter and meat supplies have renewed the emphasis on medium, small, and very small meat slaughterers/processors. Numerous states are investing in slaughter/processing plant construction, renovation of existing plants, and establishing or reestablishing state inspection programs. It is conceivable that this reinvestment may alleviate some of the meat supply limitations; yet, there are a number of factors these plants need to address for economic sustainability, including (but not limited to): consistency of local and regional livestock supply; availability of a trained, experienced workforce; plant holding pens and slaughter floor design; pre-slaughter animal welfare training and compliance; development and implementation of food safety programs; fresh and frozen storage capacities; local and regional marketing channels and modes of distribution; and by-products markets and offal disposal. Regardless of plant size, the ultimate goal of all meat packers/processors is the production of consistent, readily available and affordable, high-quality meat and meat products; however, the traditional driving forces of price and taste are being slowly supplanted by consumers' concerns about production practices and animal management, perceived nutritional benefits, animal welfare concerns, food locality, and convenience. This presentation will attempt to amalgamate the challenges facing medium, small, and very small meat processors with consumers' preferences in relation to the sustainability of these revitalized segments of the livestock and meat industries.

19.
Computer Applications in Engineering Education ; 2022.
Article in English | Scopus | ID: covidwho-1825891

ABSTRACT

Today, microcontrollers are of paramount importance in various aspects of life. They are used in designs across many industrial fields, from simple to highly complex devices. With the COVID-19 crisis ongoing, blended learning is the ideal solution for a post-pandemic society. This paper proposes a blended learning system as a solution to today's problems in teaching microcontroller courses, combining distance learning with the proposed training toolkit for real work. Implementation of the proposed solution began by constructing an inexpensive training kit ($100) to empower all students, even those in remote rural areas. The distance learning model allows simulation of the proposed IoT projects electronically anywhere and at any time using the Proteus design suite, which helps students conduct them before the actual laboratory appointment. Two learning models are programmed in assembly language, which is directly related to the internal architecture of the microcontroller and provides access to all the real capabilities of its central processing unit. To get acquainted with all the features offered by the microcontroller integrated circuit, various IoT projects were constructed, each dedicated to learning architectural features important to engineering students. The proposed IoT systems operate with minimal power consumption, which is very important for portable devices. A student questionnaire was formulated to measure the proposed system's benefit over three academic years. © 2022 Wiley Periodicals LLC.

20.
3rd International Conference on Communication, Devices and Computing, ICCDC 2021 ; 851:125-133, 2022.
Article in English | Scopus | ID: covidwho-1750655

ABSTRACT

In the ongoing COVID-19 situation, one of the most basic yet necessary supplies for any human being is the face mask. Medical stores are facing shortages of face masks, and buying them leads to crowding in confined spaces such as the stores themselves, aggravating the situation. The only solution is to increase the number of sources from which citizens can get face masks while avoiding crowding and contact with other people. The proposed mask vending machine makes this possible. The physical machine that stores and vends the masks has a Raspberry Pi as the central processing unit, which also controls the additional components, such as the stepper motors and the display monitor. For payment and choice of quantity, an app has been designed. A QR code is displayed on the monitor of the vending machine and scanned with the app. Once scanned, the app asks the user for the number of masks needed and facilitates the transaction process. Once the transaction succeeds, the masks are vended. © 2022, The Author(s), under exclusive license to Springer Nature Singapore Pte Ltd.
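The vending flow described here (display a QR code, wait for the app to confirm payment and quantity, vend, return to idle) is naturally a small state machine on the Raspberry Pi. A minimal sketch; the state and event names are invented, not taken from the paper:

```python
# Transition table for the vending flow: (current state, event) -> next state.
# Unknown events leave the machine where it is, a safe default for a kiosk.
TRANSITIONS = {
    ("IDLE", "show_qr"): "QR_DISPLAYED",
    ("QR_DISPLAYED", "payment_confirmed"): "VENDING",
    ("QR_DISPLAYED", "timeout"): "IDLE",
    ("VENDING", "masks_dispensed"): "IDLE",
}

def step(state, event):
    """Advance the machine one event; ignore events that do not apply."""
    return TRANSITIONS.get((state, event), state)

# A successful purchase walks the machine around the full loop.
state = "IDLE"
for event in ["show_qr", "payment_confirmed", "masks_dispensed"]:
    state = step(state, event)
```

In the real machine the "masks_dispensed" event would come from the stepper-motor controller and "payment_confirmed" from the app's backend.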
